situational context
Doing Things with Words: Rethinking Theory of Mind Simulation in Large Language Models
Lombardi, Agnese, Lenci, Alessandro
Language is fundamental to human cooperation, facilitating not only the exchange of information but also the coordination of actions through shared interpretations of situational contexts. This study explores whether the Generative Agent-Based Model (GABM) Concordia can effectively model Theory of Mind (ToM) within simulated real-world environments. Specifically, we assess whether this framework successfully simulates ToM abilities and whether GPT-4 can perform tasks by making genuine inferences from social context, rather than relying on linguistic memorization. Our findings reveal a critical limitation: GPT-4 frequently fails to select actions based on belief attribution, suggesting that apparent ToM-like abilities observed in previous studies may stem from shallow statistical associations rather than true reasoning. Additionally, the model struggles to generate coherent causal effects from agent actions, exposing difficulties in processing complex social interactions. These results challenge current claims about emergent ToM-like capabilities in LLMs and highlight the need for more rigorous, action-based evaluation frameworks.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Europe > Italy > Sardinia > Cagliari (0.04)
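The action-based evaluation the abstract calls for can be illustrated with a minimal false-belief check (a hypothetical sketch, not the Concordia/GABM setup itself): track where an object actually is versus where an agent last saw it, and test whether a belief-attributing policy predicts a search at the believed location.

```python
# Minimal Sally-Anne-style false-belief harness: a hypothetical sketch,
# not the paper's actual simulation environment.

def believed_location(events, observer):
    """Return where `observer` last saw the object moved to."""
    loc = None
    for actor, new_loc, visible_to in events:
        if observer in visible_to:
            loc = new_loc
    return loc

def actual_location(events):
    """Return where the object really ended up."""
    loc = None
    for _, new_loc, _ in events:
        loc = new_loc
    return loc

# Sally sees the marble go into the basket, then leaves; Anne moves it.
events = [
    ("Sally", "basket", {"Sally", "Anne"}),
    ("Anne", "box", {"Anne"}),          # Sally does not observe this move
]

assert believed_location(events, "Sally") == "basket"
assert actual_location(events) == "box"
# A belief-attributing policy predicts Sally searches the basket,
# even though the marble is actually in the box.
```

An action-based test then checks whether the model's chosen action matches the believed location rather than the true one, which is exactly where the abstract reports GPT-4 failing.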
Finding Uncommon Ground: A Human-Centered Model for Extrospective Explanations
Spillner, Laura, Zargham, Nima, Pomarlan, Mihai, Porzel, Robert, Malaka, Rainer
The need for explanations in AI has, by and large, been driven by the desire to increase the transparency of black-box machine learning models. However, such explanations, which focus on the internal mechanisms that lead to a specific output, are often unsuitable for non-experts. To facilitate a human-centered perspective on AI explanations, agents need to focus on individuals and their preferences as well as the context in which the explanations are given. This paper proposes a personalized approach to explanation, where the agent tailors the information provided to the user based on what is most likely pertinent to them. We propose a model of the agent's worldview that also serves as a personal and dynamic memory of its previous interactions with the same user, based on which the artificial agent can estimate what part of its knowledge is most likely new information to the user.
- Europe > Germany > Bremen > Bremen (0.14)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (4 more...)
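The paper's core idea, explaining only what is likely new to a given user, can be sketched as a per-user memory layered over the agent's worldview (names and data here are illustrative assumptions, not the authors' model):

```python
# A hypothetical sketch of "uncommon ground": the agent explains only the
# facts it does not already believe this particular user knows.

class ExplainerMemory:
    def __init__(self, worldview):
        self.worldview = set(worldview)   # the agent's own knowledge
        self.shared = {}                  # user_id -> facts already given

    def explain(self, user_id, relevant_facts):
        known = self.shared.setdefault(user_id, set())
        new_info = [f for f in relevant_facts
                    if f in self.worldview and f not in known]
        known.update(new_info)            # remember what was shared
        return new_info

mem = ExplainerMemory({"ovens are hot", "metal conducts heat",
                       "water boils at 100C"})
first = mem.explain("alice", ["ovens are hot", "metal conducts heat"])
second = mem.explain("alice", ["ovens are hot", "metal conducts heat"])
assert first == ["ovens are hot", "metal conducts heat"]
assert second == []   # now common ground with alice, so nothing to explain
```

The per-user `shared` store is what makes the memory "personal and dynamic": the same question yields different explanations for different users and at different points in the interaction history.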
Situated Haptic Interaction: Exploring the Role of Context in Affective Perception of Robotic Touch
Affective interaction is not merely about recognizing emotions; it is an embodied, situated process shaped by context and co-created through interaction. In affective computing, the role of haptic feedback within dynamic emotional exchanges remains underexplored. This study investigates how situational emotional cues influence the perception and interpretation of haptic signals given by a robot. In a controlled experiment, 32 participants watched video scenarios in which a robot experienced either positive actions (such as being kissed), negative actions (such as being slapped), or neutral actions. After each video, the robot conveyed its emotional response through haptic communication, delivered via a wearable vibration sleeve worn by the participant. Participants rated the robot's emotional state, namely its valence (positive or negative) and arousal (intensity), based on the video, the haptic feedback, and the combination of the two. The study reveals a dynamic interplay between visual context and touch. Participants' interpretation of haptic feedback was strongly shaped by the emotional context of the video, with visual context often overriding the perceived valence of the haptic signal. Negative haptic cues amplified the perceived valence of the interaction, while positive cues softened it. Furthermore, haptic cues overrode participants' perception of the video's arousal. Together, these results offer insights into how situated haptic feedback can enrich affective human-robot interaction, pointing toward more nuanced and embodied approaches to emotional communication with machines.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
PersonaLens: A Benchmark for Personalization Evaluation in Conversational AI Assistants
Zhao, Zheng, Vania, Clara, Kayal, Subhradeep, Khan, Naila, Cohen, Shay B., Yilmaz, Emine
Large language models (LLMs) have advanced conversational AI assistants. However, systematically evaluating how well these assistants apply personalization--adapting to individual user preferences while completing tasks--remains challenging. Existing personalization benchmarks focus on chit-chat, non-conversational tasks, or narrow domains, failing to capture the complexities of personalized task-oriented assistance. To address this, we introduce PersonaLens, a comprehensive benchmark for evaluating personalization in task-oriented AI assistants. Our benchmark features diverse user profiles equipped with rich preferences and interaction histories, along with two specialized LLM-based agents: a user agent that engages in realistic task-oriented dialogues with AI assistants, and a judge agent that employs the LLM-as-a-Judge paradigm to assess personalization, response quality, and task success. Through extensive experiments with current LLM assistants across diverse tasks, we reveal significant variability in their personalization capabilities, providing crucial insights for advancing conversational AI systems.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Europe > Austria > Vienna (0.14)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- (16 more...)
- Leisure & Entertainment (1.00)
- Media > Film (0.93)
- Education (0.92)
- Consumer Products & Services (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
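The benchmark's two-agent structure can be sketched schematically with the model calls stubbed out. Everything here, the prompts, the rubric, the stub, is an illustrative assumption, not the PersonaLens implementation:

```python
# Schematic of a user agent + LLM-as-a-Judge loop, with LLM calls stubbed.

def user_agent(profile, task, llm):
    """Generate a task request in character, using the user's preferences."""
    return llm(f"As a user who likes {profile['preference']}, request: {task}")

def judge_agent(dialogue, profile, llm):
    """Score whether the assistant respected the stated preference."""
    verdict = llm(f"Did the assistant respect '{profile['preference']}'? {dialogue}")
    return {"personalized": "yes" in verdict.lower()}

def stub_llm(prompt):
    # Stand-in for a real model call; echoes enough to exercise the loop.
    return ("yes, window seat was honored" if "respect" in prompt
            else "Book me a flight, window seat please")

profile = {"preference": "window seats"}
request = user_agent(profile, "book a flight", stub_llm)
dialogue = request + " -> Assistant: booked 14A, a window seat."
assert judge_agent(dialogue, profile, stub_llm)["personalized"] is True
```

In the real benchmark both agents would call an actual LLM, and the judge would additionally score response quality and task success, as the abstract describes.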
Factors Impacting the Quality of User Answers on Smartphones
So far, most research investigating the predictability of human behavior, such as mobility and social interactions, has focused mainly on the exploitation of sensor data. However, sensor data often fail to capture the subjective motivations behind individuals' behavior. Understanding personal context (e.g., where one is and what they are doing) can greatly increase predictability. The main limitation is that human input is often missing or inaccurate. The goal of this paper is to identify factors that influence the quality of responses when users are asked about their current context. We find that two key factors influence response quality: user reaction time and completion time. These factors correlate with various exogenous causes (e.g., situational context, time of day) and endogenous causes (e.g., procrastination attitude, mood). We then study how these two factors impact the quality of responses.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Italy > Trentino-Alto Adige/Südtirol > Trentino Province > Trento (0.04)
- (2 more...)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
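The two factors the paper identifies lend themselves to a simple quality filter. The thresholds below are invented for the sketch, not taken from the study:

```python
# Illustrative only: flag likely low-quality context answers using the two
# factors the paper identifies (reaction time, completion time).

def likely_low_quality(reaction_s, completion_s,
                       max_reaction=300.0, min_completion=1.5):
    # Very late reactions suggest the asked-about context has already passed;
    # near-instant completions suggest the user did not read the question.
    return reaction_s > max_reaction or completion_s < min_completion

assert likely_low_quality(reaction_s=600.0, completion_s=5.0)   # answered too late
assert likely_low_quality(reaction_s=20.0, completion_s=0.4)    # answered too fast
assert not likely_low_quality(reaction_s=20.0, completion_s=5.0)
```

Since the paper links both factors to exogenous and endogenous causes, per-user or per-situation thresholds would likely work better than the fixed defaults shown here.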
Personalized State Anxiety Detection: An Empirical Study with Linguistic Biomarkers and A Machine Learning Pipeline
Wang, Zhiyuan, Tang, Mingyue, Larrazabal, Maria A., Toner, Emma R., Rucker, Mark, Wu, Congyu, Teachman, Bethany A., Boukhechba, Mehdi, Barnes, Laura E.
Individuals high in social anxiety symptoms often exhibit elevated state anxiety in social situations. Research has shown it is possible to detect state anxiety by leveraging digital biomarkers and machine learning techniques. However, most existing work trains models on an entire group of participants, failing to capture individual differences in their psychological and behavioral responses to social contexts. To address this concern, in Study 1, we collected linguistic data from N=35 high socially anxious participants in a variety of social contexts, finding that digital linguistic biomarkers significantly differ between evaluative vs. non-evaluative social contexts and between individuals having different trait psychological symptoms, suggesting the likely importance of personalized approaches to detect state anxiety. In Study 2, we used the same data and results from Study 1 to model a multilayer personalized machine learning pipeline to detect state anxiety that considers contextual and individual differences. This personalized model outperformed the baseline F1-score by 28.0%. Results suggest that state anxiety can be more accurately detected with personalized machine learning approaches, and that linguistic biomarkers hold promise for identifying periods of state anxiety in an unobtrusive way.
- North America > United States > Virginia > Albemarle County > Charlottesville (0.14)
- North America > United States > New York > Broome County > Binghamton (0.04)
- Oceania > Australia (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
How chatbots will become better at listening
Digital assistants are increasingly making their way into our everyday lives, not only via mobile devices, but also through home devices, cars, and more. As their scope widens, the technology powering them must evolve and deepen to fit people's ever-growing expectations and pace of life. With the intelligence of these assistants maturing, businesses will see new uses that go beyond today's simple question-and-answer interactions (Question: What is today's weather? Answer: Sunny and 64 degrees Fahrenheit). This new generation of assistants will have implications beyond our original thinking -- but it all starts with the tech. Voice artificial intelligence (Voice AI) involves the application of artificial intelligence techniques to voice-based interactions, enabling users to converse with systems in a flexible and collaborative way.
Modeling Situated Conversations for a Child-Care Robot Using Wearable Devices
On, Kyoung-Woon (Seoul National University) | Kim, Eun-Sol (Seoul National University) | Zhang, Byoung-Tak (Seoul National University)
How can robots communicate fluently with humans and hold context-preserving conversations? This is a central problem in robotics research, especially for service robots such as child-care robots. Here, we aim to develop a situated conversation system for child-care robots. The system considers both the current conversational context between robot and child and the situation the child is in. It consists of two parts. The first part understands the context: it uses the robot's embedded sensors to track the conversational context and the child's wearable sensors to gather information about the situation the child is in. The second part generates the situated conversation. In terms of the model, we designed a hierarchical Bayesian Network for the first part, and a Hypernetwork model is used for the second. We illustrate the application of communication with a child in a child-care service robot scenario. For this application, we collected wearable sensor data from the child and mother-child conversation data in daily life. Finally, we discuss our results and future work.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Communications > Networks > Sensor Networks (0.37)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.34)
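The "understand the context" part amounts to probabilistic inference over situations given sensor evidence. A toy discrete Bayes update conveys the idea; the contexts, priors, and likelihoods below are made up for illustration, not the paper's learned hierarchical Bayesian Network:

```python
# Toy single-variable Bayes update: infer the child's situation from one
# wearable sensor observation. All numbers are invented for the sketch.

def posterior(prior, likelihood, observation):
    """P(context | observation) via Bayes' rule over a discrete context set."""
    unnorm = {ctx: prior[ctx] * likelihood[ctx][observation] for ctx in prior}
    z = sum(unnorm.values())
    return {ctx: p / z for ctx, p in unnorm.items()}

prior = {"playing": 0.5, "sleeping": 0.3, "eating": 0.2}
likelihood = {                      # P(motion reading | context)
    "playing":  {"high_motion": 0.9,  "low_motion": 0.1},
    "sleeping": {"high_motion": 0.05, "low_motion": 0.95},
    "eating":   {"high_motion": 0.3,  "low_motion": 0.7},
}

post = posterior(prior, likelihood, "high_motion")
assert max(post, key=post.get) == "playing"
assert abs(sum(post.values()) - 1.0) < 1e-9
```

The paper's hierarchical network would chain several such conditional layers (sensors, activities, conversational context), but each layer performs this same kind of evidence-weighted update.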